Welcome

Let’s say you are not a (Java) developer, and you are interested in deploying your applications to the cloud. You know some Docker, you don’t know anything about Kubernetes (and don’t want to learn), but you have heard of Microsoft Azure. This workshop is for you.

This workshop offers attendees an intro-level, hands-on session with Azure Container Apps: from pushing containerized microservices and deploying an entire infrastructure, to executing those microservices and consuming them. But what are we going to deploy to Azure Container Apps? Well, it’s going to be a set of microservices:

  • Developed with Quarkus (the microservices are already developed, and you don’t need to know Quarkus)

  • Exposing HTTP APIs

  • Exchanging events with Apache Kafka

  • Storing data in databases

  • With some parts of the dark side of microservices (resilience, health, monitoring)

  • Answering the ultimate question: are super-heroes stronger than super-villains?

This workshop is a BYOL (Bring Your Own Laptop) session, so bring your Windows, macOS, or Linux laptop, and be ready to install a few tools. What you are going to learn:

  • What is Azure Container Apps and what is Quarkus

  • How to build Docker images out of an existing Java application

  • How to execute the Docker images locally with Docker Compose

  • How to push the Docker images to Azure Registry

  • How to create managed services in Azure (Postgres database, Kafka)

  • How to deploy microservices to Azure Container Apps

  • How to configure Docker images in Azure Container Apps

  • How to improve the resilience of your microservices

  • And much more!

Ready? Here we go!

azure aca qrcode

Presenting the Workshop

This workshop gives you a practical introduction to Azure Container Apps. You will use all the tools needed to deploy an entire microservice architecture, mixing classical HTTP, reactive, and event-based microservices. You will finish by monitoring and scaling the microservices so they can handle the load.

The idea is that you leave this workshop with a good understanding of what Azure Container Apps is, what it is not, and how it can help you in your projects. Then, you’ll be prepared to investigate a bit more on your own.

What Will You Be Deploying?

In this workshop, you will deploy an existing application that allows superheroes to fight against supervillains. You will be containerizing and deploying several microservices communicating either synchronously via REST or asynchronously using Kafka:

  • Super Hero UI: an Angular application that picks a random superhero and a random supervillain and makes them fight. The Super Hero UI is exposed via Quarkus and invokes the Fight REST API.

  • Villain REST API: A classical HTTP microservice exposing CRUD operations on Villains, stored in a PostgreSQL database.

  • Hero REST API: A reactive HTTP microservice exposing CRUD operations on Heroes, stored in a Postgres database.

  • Fight REST API: This REST API invokes the Hero and Villain APIs to get a random superhero and supervillain. Each fight is, then, stored in a PostgreSQL database. This microservice can be developed using both the classical (imperative) or reactive approach. Invocations to the hero and villain services are protected using resilience patterns (retry, timeout, circuit-breakers).

  • Statistics: Each fight is asynchronously sent (via Kafka) to the Statistics microservice. It has an HTML + jQuery UI displaying all the statistics.

Diagram

The main UI allows you to pick up one random Hero and Villain by clicking on "New Fighters." Then it’s just a matter of clicking on "Fight!" to get them to fight. The table at the bottom shows the list of the previous fights.

angular ui

How Does This Workshop Work?

You have this material in your hands (either electronically or printed), and you can now follow it step by step. The structure of this workshop is as follows:

  • Installing all the needed tools: in this section, you will install all the tools and code to be able to bundle, package and deploy our application

  • Azure Container Apps and Quarkus: this section introduces Quarkus and Azure Container Apps to make sure you have all the needed vocabulary to follow along

  • Running the Application Locally: in this section, you will build Docker images out of Quarkus microservices, execute them locally with Docker Compose, and push them to Azure Registry, all using Docker and the Azure CLI

  • Running the Application on Azure Container Apps: in this section, you will create an entire infrastructure on Azure (Postgres databases, Kafka, etc.) and deploy the microservices to Azure Container Apps

  • Administrating: in this section, you will add some load to your microservices, monitor them, scale them, check the logs, etc.

If you already have the tools installed, skip the Installing all the needed tools section and jump to the sections Azure Container Apps and Quarkus or straight to Running the Application Locally (if you already know Quarkus and Azure Container Apps), and start hacking on the command lines. This "à la carte" mode lets you make the most of this 2-to-4-hour hands-on lab.

What Do You Have to Do?

This workshop should be as self-explanatory as possible. So your job is to follow the instructions by yourself, do what you are supposed to do, and do not hesitate to ask for any clarification or assistance; that’s why the team is here.

Oh, and be ready to have some fun!

Software Requirements

First of all, make sure you have a 64-bit computer with admin rights (so you can install all the needed tools) and at least 16 GB of RAM (as some tools need quite a few resources).

If you are using macOS, make sure the version is greater than 10.11.x (El Capitan).

This workshop makes use of the following software, tools, and frameworks, which you will need to install and know (more or less) how they work:

  • Azure CLI

  • Docker

  • cURL (or any other command line HTTP client)

  • (optionally) Any IDE you feel comfortable with (e.g. IntelliJ IDEA, Eclipse IDE, VS Code…)

The following section focuses on how to install and set up the needed Software. You can skip the next section if you have already installed all the prerequisites.

This workshop assumes a bash shell. If you run on Windows, in particular, adjust the commands accordingly.

Installing Software

For this workshop you don’t need to have a Java environment set up on your machine. All the Java code will be built with multistage Dockerfile builds. So only Docker is needed to build, deploy, and execute the application. As for deploying the infrastructure and the application to Azure, we will use the Azure CLI.

WSL

Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment — including most command-line tools, utilities, and applications — directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

If you are using Windows, it is recommended to install WSL.

Installing WSL

You can install everything you need to run Windows Subsystem for Linux (WSL) by entering this command in an administrator PowerShell or Windows Command Prompt and then restarting your machine:

wsl --install

This command will enable the required optional components, download the latest Linux kernel, set WSL 2 as your default, and install a Linux distribution for you (Ubuntu by default).

The first time you launch a newly installed Linux distribution, a console window will open and you’ll be asked to wait for files to decompress and be stored on your machine. All future launches should take less than a second.

Azure CLI

The Azure command-line interface (Azure CLI) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. For this workshop you need to have Azure CLI installed locally on your machine.

Installing Azure CLI

The Azure CLI is available to install in Windows, macOS, and Linux environments. It can also be run in a Docker container and Azure Cloud Shell. On macOS, the easiest way to install the Azure CLI is by executing the following command:

brew install azure-cli
Checking for Azure CLI Installation

Once the installation is complete, you can execute a few commands to be sure the Azure CLI is correctly installed:

az version
az --version
Some Azure CLI Commands

Azure CLI is a command-line utility where you can use several parameters and options to create, query, or delete Azure resources. To get some help on the commands, you can type:

az help
az --help
az vm --help
az vm disk --help
az vm disk attach --help

Docker

Instead of Docker, you can use Rancher Desktop, which provides the same capabilities but also lets you run Kubernetes locally.
If you plan to deploy your applications to Kubernetes, we recommend Rancher Desktop.

Instructions to install Rancher Desktop are available in <<introduction-installing-rancher>>.

Docker is a set of utilities that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their software, libraries, and configuration files; they can communicate with each other through well-defined channels.

Installing Docker

Our infrastructure will use Docker to ease the installation of the different technical services (database, monitoring…​). So for this, we need to install docker and docker compose. Installation instructions are available on the following page:

On Linux, don’t forget the post-execution steps described on https://docs.docker.com/install/linux/linux-postinstall/.

If you are on the latest versions of Fedora, you might prefer podman (https://podman.io/) instead of docker. To install podman and podman-compose on Fedora, please follow the instructions at https://fedoramagazine.org/manage-containers-with-podman-compose/. You will also need to configure the Testcontainers library used in Dev Services later in the workshop, as mentioned in https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/#tldr.
Checking for Docker Installation

Once installed, check that both docker and docker compose are available in your PATH:

$ docker version
Docker version 20.10.8, build 3967b7d
Cloud integration: v1.0.24
Version:           20.10.14
API version:       1.41

$ docker compose version
Docker Compose version v2.5.0

Finally, run your first container as follows:

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Some Docker Commands

Docker is a command-line utility where you can use several parameters and options to start/stop a container. You invoke docker with zero, one, or several command-line options with the container or image ID you want to work with. Docker comes with several options that are described in the documentation if you need more help.[1] To get some help on the commands and options, you can use the following commands:

$ docker help

Usage:  docker [OPTIONS] COMMAND

$ docker help attach

Usage:  docker attach [OPTIONS] CONTAINER

Attach local standard input, output, and error streams to a running container

Here are some commands that you will be using to start/stop containers in this workshop.

  • docker container ls: Lists containers.

  • docker container start CONTAINER: Starts one or more stopped containers.

  • docker compose -f docker-compose.yaml up -d: Starts all containers defined in a Docker Compose file.

  • docker compose -f docker-compose.yaml down: Stops all containers defined in a Docker Compose file.

cURL

To invoke the REST Web Services described in this workshop, we often use cURL.[2] cURL is a command-line tool and library to do reliable data transfers with various protocols, including HTTP. It is free, open-source (available under the MIT Licence), and has been ported to several operating systems.

Installing cURL

If you are on Mac OS X and have installed Homebrew, then installing cURL is just a matter of a single command.[3] Open your terminal and install cURL with the following command:

brew install curl

For Windows, download and install curl from https://curl.se/download.html.

Checking for cURL Installation

Once installed, check for cURL by running curl --version in the terminal. It should display the cURL version:

$ curl --version
curl 7.64.1 (x86_64-apple-darwin20.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.41.0
Release-Date: 2019-03-27
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS GSS-API HTTP2 HTTPS-proxy IPv6 Kerberos Largefile libz MultiSSL NTLM NTLM_WB SPNEGO SSL UnixSockets
Some cURL Commands

cURL is a command-line utility where you can use several parameters and options to invoke URLs. You invoke curl with zero, one, or several command-line options to accompany the URL (or set of URLs) you want the transfer to be about. cURL supports over two hundred different options, and we recommend reading the documentation for more help.[4] To get some help on the commands and options, you can use the following command:

$ curl --help

Usage: curl [options...] <url>

You can also opt to use curl --manual, which will output the entire man page for cURL plus an appended tutorial for the most common use cases.

Here are some commands you will use to invoke the RESTful web service examples in this workshop.

Formatting the cURL JSON Output

Very often, when using cURL to invoke a RESTful web service, we get some JSON payload. cURL does not format this JSON, so you will get a flat string such as:

curl http://localhost:8083/api/heroes
[{"id":"1","name":"Chewbacca","level":"14"},{"id":"2","name":"Wonder Woman","level":"15"},{"id":"3","name":"Anakin Skywalker","level":"8"}]

But what we want is to format the JSON payload, so it is easier to read. For that, there is a neat utility tool called jq that we could use. jq is a tool for processing JSON inputs, applying the given filter to its JSON text inputs, and producing the filter’s results as JSON on standard output.[5] You can install it on macOS with a simple brew install jq. Once installed, it’s just a matter of piping the cURL output to jq like this:

curl http://localhost:8083/api/heroes | jq

[
  {
    "id": "1",
    "name": "Chewbacca",
    "level": "14"
  },
  {
    "id": "2",
    "name": "Wonder Woman",
    "level": "15"
  },
  {
    "id": "3",
    "name": "Anakin Skywalker",
    "level": "8"
  }
]
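Beyond pretty-printing, jq can also filter and transform the payload. As an illustration, the following extracts the name of every hero whose level is above 10; the sample data is inlined here so the command works even without the API running:

```shell
# Sample heroes payload inlined; normally you would pipe the curl output instead.
printf '%s' '[{"id":"1","name":"Chewbacca","level":"14"},{"id":"2","name":"Wonder Woman","level":"15"},{"id":"3","name":"Anakin Skywalker","level":"8"}]' |
  jq -r '.[] | select((.level | tonumber) > 10) | .name'
```

With the services running, the same filter applies directly to the curl output: curl http://localhost:8083/api/heroes | jq -r '…'.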

Git

Git⁠[6] is a free and open source distributed version control system designed for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files. Git was created by Linus Torvalds in 2005 for the development of the Linux kernel, with other kernel developers contributing to its initial development.

Installing Git

On Mac, if you have installed Homebrew, then installing Git is just a matter of a single command. Open your terminal and install Git with the following command:

$ brew install git
Checking for Git Installation

Once installed, check for Git by running git --version in the terminal. It should display the git version:

$ git --version

Recap

Just make sure the following commands work on your machine.

$ az --version
$ docker version
$ docker compose version
$ curl --version
$ git --version
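If you prefer a single check, the recap above can be sketched as a small shell loop (a convenience, not part of the workshop scripts) that reports whether each tool is on your PATH:

```shell
# Check each prerequisite; "MISSING" means the tool is not on the PATH.
for tool in az docker curl git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```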

Preparing for the Workshop

This workshop needs internet access to reach Azure, of course, but also to download all sorts of Maven artifacts, Docker images, and even pictures. Some of these artifacts are large, and because we have to share internet connections at the workshop, it is better to download them beforehand. Here are a few commands that you can execute before the workshop.

Clone the GitHub repository of the application

Call to action

First, clone the GitHub repository of the Super Heroes application located at https://github.com/quarkusio/quarkus-workshops by executing the following command:

git clone https://github.com/quarkusio/quarkus-workshops.git --depth 1

The code of this Super Heroes application is separated into two different directories:

Diagram

Under the super-heroes directory you will find the entire Super Hero application spread throughout a set of subdirectories, each one containing a microservice or some tooling. The final structure will be the following:

Diagram

Checking Ports

During this workshop, we will use several ports.

Call to action

Just make sure the following ports are free, so you don’t run into any conflicts.

lsof -i tcp:8080    # UI
lsof -i tcp:8082    # Fight REST API
lsof -i tcp:8084    # Villain REST API
lsof -i tcp:8083    # Hero REST API
lsof -i tcp:5432    # Postgres
lsof -i tcp:9090    # Prometheus
lsof -i tcp:2181    # Zookeeper
lsof -i tcp:9092    # Kafka
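The checks above can also be wrapped in one loop. This is just a convenience sketch: lsof exits with a non-zero status when it finds no listener on a port, so such a port is reported as free:

```shell
# Report the status of every port used in this workshop.
for port in 8080 8082 8083 8084 5432 9090 2181 9092; do
  if lsof -i tcp:"$port" >/dev/null 2>&1; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```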

Warming up Docker images

To warm up your Docker image repository, navigate to the quarkus-workshop-super-heroes/super-heroes/infrastructure directory. Here, you will find the docker-compose.yaml and docker-compose-linux.yaml files, which define all the needed Docker images. Notice that there is a db-init directory with an initialize-databases.sql script which sets up our databases, and a monitoring directory (all of which will be explained later).

Call to action

Then execute the following command which will download all the Docker images and start the containers:

docker compose -f docker-compose.yaml up -d
Linux Users beware

If you are on Linux, use docker-compose-linux.yaml instead of docker-compose.yaml. This Linux-specific file allows Prometheus to fetch metrics from the services running on the host machine.

If you have an issue creating the roles for the database with the initialize-databases.sql file, you have to execute the following commands:

docker exec -it --user postgres super-database psql -c "CREATE ROLE superman LOGIN PASSWORD 'superman' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION"
docker exec -it --user postgres super-database psql -c "CREATE ROLE superbad LOGIN PASSWORD 'superbad' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION"
docker exec -it --user postgres super-database psql -c "CREATE ROLE superfight LOGIN PASSWORD 'superfight' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION"

After this, verify the containers are running using the following command:

docker compose -f docker-compose.yaml ps

The output should resemble something like this:

     Name                   Command               State           Ports
--------------------------------------------------------------------------------
kafka            sh -c bin/kafka-server-sta ...   Up      0.0.0.0:9092->9092/tcp
super-database   docker-entrypoint.sh postgres    Up      0.0.0.0:5432->5432/tcp
super-visor      /bin/prometheus --config.f ...   Up      0.0.0.0:9090->9090/tcp
zookeeper        sh -c bin/zookeeper-server ...   Up      0.0.0.0:2181->2181/tcp

Once all the containers are up and running, you can shut them down and remove their volumes with the commands:

docker compose -f docker-compose.yaml down
docker compose -f docker-compose.yaml rm
What’s this infra?

Any microservice system is going to rely on a set of technical services. In our context, we are going to use PostgreSQL as the database, Prometheus as the monitoring tool, and Kafka as the event/message bus. This infrastructure starts all these services, so you don’t have to worry about them.

This infra will only be used when we run our services in prod mode. In dev mode, Quarkus will start everything for us.

Setting Up Azure

To be able to deploy the application to Azure, you first need an Azure subscription. If you don’t have one, go to https://signup.azure.com and register.

Call to action

Then, sign in to Azure from the CLI:

az login

Make sure you are using the right subscription with:

az account show
Setting Up the Azure Environment

Azure CLI is extensible and you can install as many extensions as you need.

Call to action

For this workshop, install the Azure Container Apps, Database, and Log Analytics extensions for the Azure CLI:

az extension add --name containerapp  --upgrade
az extension add --name rdbms-connect --upgrade
az extension add --name log-analytics --upgrade

You can then check the extensions that are installed in your system with the following command:

az extension list

You should see the extensions that have been installed:

If you are on Windows and using WSL, the extensions should be installed under /home/<user>/.azure/cliextensions/. If that’s not the case, and extensions are instead installed under C:\Users\<user>\.azure\cliextensions, you should reboot your computer.

[
  ...
  {
    "experimental": false,
    "extensionType": "whl",
    "name": "containerapp",
    "path": "/Users/agoncal/.azure/cliextensions/containerapp",
    "preview": true,
    "version": "0.3.4"
  },
  ...
]

Then, register the needed Azure namespaces:

az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights

Creating Azure Resources

Before creating the infrastructure for the Super Heroes application and deploying the microservices to Azure Container Apps, we need to create some Azure resources.

Setting Up the Azure environment variables

Let’s first set a few environment variables that will help us in creating the Azure infrastructure.

Call to action

Set the following variables:

RESOURCE_GROUP="super-heroes"
LOCATION="eastus2"
TAG="super-heroes"
LOG_ANALYTICS_WORKSPACE="super-heroes-logs"
UNIQUE_IDENTIFIER=$(whoami)
REGISTRY="superheroesregistry"$UNIQUE_IDENTIFIER
IMAGES_TAG="1.0"
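One caveat: the registry name is built from your username, and Azure Container Registry names may only contain alphanumeric characters. If your username contains dots, dashes, or capitals (e.g. john.doe), a defensive variant — a sketch, not part of the original workshop commands — strips the invalid characters first:

```shell
# Keep only lowercase letters and digits from the username before
# building the (globally unique) registry name.
UNIQUE_IDENTIFIER=$(whoami | tr -cd 'a-z0-9')
REGISTRY="superheroesregistry${UNIQUE_IDENTIFIER}"
echo "$REGISTRY"
```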

Now let’s create the Azure resources.

Resource Group

A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. In our workshop, all the databases, all the microservices, etc. will be grouped into a single resource group.

Call to action

Execute the following command to create the Super Hero resource group:

az group create \
  --name "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG"
Log Analytics Workspace

A Log Analytics workspace is the environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. We will use the same workspace for most of the Azure resources we will be creating.

Call to action

Create a Log Analytics workspace with the following command:

az monitor log-analytics workspace create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" \
  --workspace-name "$LOG_ANALYTICS_WORKSPACE"

Let’s also retrieve the Log Analytics Client ID and client secret and store them in environment variables:

LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show  \
  --resource-group "$RESOURCE_GROUP" \
  --workspace-name "$LOG_ANALYTICS_WORKSPACE" \
  --query customerId  \
  --output tsv | tr -d '[:space:]'`

echo $LOG_ANALYTICS_WORKSPACE_CLIENT_ID

LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys \
  --resource-group "$RESOURCE_GROUP" \
  --workspace-name "$LOG_ANALYTICS_WORKSPACE" \
  --query primarySharedKey \
  --output tsv | tr -d '[:space:]'`

echo $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET
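A note on the tr -d '[:space:]' at the end of these captures: the tsv output can carry trailing newlines (and, on Windows/WSL, carriage returns) that would corrupt later commands if left in the variable. A self-contained sketch with a hypothetical workspace ID shows the effect:

```shell
# Simulate a workspace ID as returned with a trailing CRLF (hypothetical value).
RAW_ID=$'d1e2f3a4-aaaa-bbbb-cccc-444455556666\r\n'
CLEAN_ID=$(printf '%s' "$RAW_ID" | tr -d '[:space:]')
echo "raw length: ${#RAW_ID}, clean length: ${#CLEAN_ID}"
# → raw length: 38, clean length: 36
```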
Azure Container Registry

In the next chapters we will be creating Docker containers and pushing them to the Azure Container Registry. Azure Container Registry is a private registry for hosting container images. Using the Azure Container Registry, you can store Docker-formatted images for all types of container deployments.

Call to action

First, let’s create an Azure Container Registry with the following command:

az acr create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" \
  --name "$REGISTRY" \
  --workspace "$LOG_ANALYTICS_WORKSPACE" \
  --sku Standard \
  --admin-enabled true

Update the repository to allow anonymous users to pull the images:

az acr update \
  --resource-group "$RESOURCE_GROUP" \
  --name "$REGISTRY" \
  --anonymous-pull-enabled true

Get the URL of the Azure Container Registry and set it to the REGISTRY_URL variable with the following command:

REGISTRY_URL=$(az acr show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$REGISTRY" \
  --query "loginServer" \
  --output tsv)

If you log into the Azure Portal you should see the following created resources.

azure portal 2

Setting Up the Environment Variables

The Super Heroes environment and application will be created and deployed using a set of Azure CLI commands. Each of these commands needs a set of parameters, and some of these parameters can be set as environment variables.

Call to action

Set the following environment variables; they will be needed in the Azure CLI commands:

# Container Apps
CONTAINERAPPS_ENVIRONMENT="super-heroes-env"

# Postgres
POSTGRES_DB_ADMIN="superheroesadmin"
POSTGRES_DB_PWD="super-heroes-p#ssw0rd-12046"
POSTGRES_DB_VERSION="13"
POSTGRES_SKU="Standard_D2s_v3"
POSTGRES_TIER="GeneralPurpose"

# Kafka
KAFKA_NAMESPACE="fights-kafka-$UNIQUE_IDENTIFIER"
KAFKA_TOPIC="fights"
KAFKA_BOOTSTRAP_SERVERS="$KAFKA_NAMESPACE.servicebus.windows.net:9093"

# Heroes
HEROES_APP="heroes-app"
HEROES_DB="heroes-db-$UNIQUE_IDENTIFIER"
HEROES_IMAGE="${REGISTRY_URL}/${HEROES_APP}:${IMAGES_TAG}"
HEROES_DB_SCHEMA="heroes"
HEROES_DB_CONNECT_STRING="postgresql://${HEROES_DB}.postgres.database.azure.com:5432/${HEROES_DB_SCHEMA}?ssl=true&sslmode=require"

# Villains
VILLAINS_APP="villains-app"
VILLAINS_DB="villains-db-$UNIQUE_IDENTIFIER"
VILLAINS_IMAGE="${REGISTRY_URL}/${VILLAINS_APP}:${IMAGES_TAG}"
VILLAINS_DB_SCHEMA="villains"
VILLAINS_DB_CONNECT_STRING="jdbc:postgresql://${VILLAINS_DB}.postgres.database.azure.com:5432/${VILLAINS_DB_SCHEMA}?ssl=true&sslmode=require"

# Fights
FIGHTS_APP="fights-app"
FIGHTS_DB="fights-db-$UNIQUE_IDENTIFIER"
FIGHTS_IMAGE="${REGISTRY_URL}/${FIGHTS_APP}:${IMAGES_TAG}"
FIGHTS_DB_SCHEMA="fights"
FIGHTS_DB_CONNECT_STRING="jdbc:postgresql://${FIGHTS_DB}.postgres.database.azure.com:5432/${FIGHTS_DB_SCHEMA}?ssl=true&sslmode=require"

# Statistics
STATISTICS_APP="statistics-app"
STATISTICS_IMAGE="${REGISTRY_URL}/${STATISTICS_APP}:${IMAGES_TAG}"

# UI
UI_APP="super-heroes-ui"
UI_IMAGE="${REGISTRY_URL}/${UI_APP}:${IMAGES_TAG}"
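To make the derived variables concrete, here is how the image coordinates expand with hypothetical values (REGISTRY_URL normally comes from the az acr show command above):

```shell
# Hypothetical values for illustration only.
REGISTRY_URL="superheroesregistryjdoe.azurecr.io"
HEROES_APP="heroes-app"
IMAGES_TAG="1.0"
HEROES_IMAGE="${REGISTRY_URL}/${HEROES_APP}:${IMAGES_TAG}"
echo "$HEROES_IMAGE"
# → superheroesregistryjdoe.azurecr.io/heroes-app:1.0
```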

Ready?

After the prerequisites have been installed, the different components warmed up, and some Azure resources created, it’s now time to deploy some code! But first, let us introduce Azure Container Apps and Quarkus.

Azure Container Apps and Quarkus


Even if you won’t be developing any Quarkus code, you might want to discover why this Java runtime is perfect for developing containerized applications, and why it is a great match for Azure Container Apps. And if you don’t know Azure Container Apps, this chapter is the right time to take a breath and discover this Azure service. In this chapter, you are going to see:

  • What’s Azure Container Apps?

  • How does Azure Container Apps fit in the long list of Azure services?

  • Pros and cons of using Azure Container Apps

  • What’s Quarkus, and how does it change the Java landscape?

  • What are the main Quarkus ideas, and how they help in the cloud-native world

What’s Azure Container Apps?

Azure Container Apps is a fully managed serverless container service on Azure. It allows you to run containerized applications without worrying about orchestration or managing complex infrastructure such as Kubernetes. You write code using your preferred programming language or framework (in this workshop it’s Java and Quarkus, but it can be anything), and build microservices with full support for Distributed Application Runtime (Dapr). Then, your containers will scale dynamically based on HTTP traffic or events powered by Kubernetes Event-Driven Autoscaling (KEDA).

There are already a few compute resources on Azure, from IaaS to FaaS. Azure Container Apps sits between PaaS and FaaS. On one hand, it feels more like PaaS, because you are not forced into a specific programming model and you can control the rules on which to scale out / scale in. On the other hand, it has quite some serverless characteristics, like scaling to zero, event-driven scaling, per-second pricing, and the ability to leverage Dapr’s event-based bindings.

azure aca compute

Container Apps is built on top of Azure Kubernetes Service, including a deep integration with KEDA (event-driven autoscaling for Kubernetes), Dapr (distributed application runtime), and Envoy (a service proxy designed for cloud-native applications). The underlying complexity is completely abstracted from the end user. So there is no need to configure your K8s service, deployment, ingress, or volume manifests… You get a very simple API and user interface to configure the desired configuration for your containerized application. This simplification also means less control, hence the difference with AKS.

azure aca intro

Azure Container Apps has the following features:

  • Revisions: automatic versioning that helps to manage the application lifecycle of your container apps

  • Traffic control: split incoming HTTP traffic across multiple revisions for Blue/Green deployments and A/B testing

  • Ingress: simple HTTPS ingress configuration, without the need to worry about DNS and certificates

  • Autoscaling: leverage all KEDA-supported scale triggers to scale your app based on external metrics

  • Secrets: deploy secrets that are securely shared between containers, scale rules and Dapr sidecars

  • Monitoring: the standard output and error streams are automatically written to Log Analytics

  • Dapr: through a simple flag, you can enable native Dapr integration for your Container Apps

Azure Container Apps introduces the following concepts:

  • Environment: this is a secure boundary around a group of Container Apps. Deployed in the same virtual network, these apps can easily intercommunicate with each other, and they write logs to the same Log Analytics workspace. An environment can be compared to a Kubernetes namespace.

  • Container App: this is a group of containers (a pod) that is deployed and scaled together. They share the same disk space and network.

  • Revision: this is an immutable snapshot of a Container App. New revisions are automatically created and are valuable for HTTP traffic redirection strategies, such as A/B testing.

azure aca concepts

What’s Quarkus?

Java was born more than 25 years ago. The world 25 years ago was quite different. The software industry has gone through several revolutions over these two decades. Java has always been able to reinvent itself to stay relevant.

But a new revolution is happening. While for years, most applications were running on huge machines, with lots of CPU and memory, they are now running on the Cloud, in constrained environments, in containers, where the resources are shared. Density is the new optimization: crank as many mini-apps (or microservices) as possible per node. And scale by adding more instances of an app instead of a more powerful single instance.

Java’s ergonomics, designed more than 20 years ago, do not fit well in this new environment. Java applications were designed to run 24/7 for months, even years. The JIT optimizes the execution over time; the GC manages the memory efficiently… But all these features have a cost, and the memory required to run Java applications and their startup times are showstoppers when you deploy 20 or 50 microservices instead of one application. The issue is not only the JVM itself; the whole Java ecosystem needs to be reinvented.

That’s where Quarkus, and other projects, enter the game. Quarkus uses a build time principle.[7] During the build of the application, tasks that usually happen at runtime are executed at build time.

quarkus build time principle

Thus, when the application runs, everything has been pre-computed, and all the annotation scanning, XML parsing, and so on won’t be executed anymore. It has two direct benefits: startup time (a lot faster) and memory consumption (a lot lower).

quarkus augmentation

So, as depicted in the figure above, Quarkus brings an infrastructure for frameworks to embrace build-time metadata discovery (like annotations), replace proxies with generated classes, pre-configure most frameworks, and handle dependency injection at build time.

Also, during the build, Quarkus detects which classes need to be accessed by reflection at runtime, boots frameworks at build time to record the result, and generally offers a lot of GraalVM optimizations for free (or at least cheap). Indeed, thanks to all this metadata, Quarkus can configure native compilers such as the GraalVM compiler to generate a native executable for your Java application. Thanks to aggressive dead-code elimination, the final executable is smaller, faster to start, and uses a ridiculously small amount of memory.

quarkus native compilation

Running the Application Locally


In this chapter we will build containers out of our Quarkus microservices and execute them locally thanks to Docker Compose. Then, we will push the containers to Azure Container Registry so we can still run them locally, and later on Azure Container Apps.

Building containers locally and pushing them to Azure Container Registry can take from 15 to 30 minutes (depending on your CPU and bandwidth). If you think you will not have enough time to complete the workshop, you can skip the next optional sections and go straight to Running remote containers.

(Optional) Building containers

In this section, we are going to package our microservices into containers. In particular, we are going to produce Linux 64-bit native executables and run them in containers. The native compilation uses the OS and architecture of the host system.

And… Linux containers are… Linux. So, to build a container with a Linux native executable (even if you are on Windows or macOS), Quarkus comes with a trick to produce these executables: a set of Dockerfiles. The Dockerfile.native file is for running the application in native mode. It looks like this:

FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5
WORKDIR /work/
RUN chown 1001 /work \
    && chmod "g+rwX" /work \
    && chown 1001:root /work
COPY --chown=1001:root target/*-runner /work/application

EXPOSE 8080
USER 1001

CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]

It’s a pretty straightforward Dockerfile taking a minimal base image and copying the generated native executable.

Building a native executable takes time, CPU, and memory. This is even more true inside a container. So, first, make sure that your container runtime has enough memory to build the executable: it requires at least 6 GB of memory, and 8 GB is recommended.

Call to action

Execute the following commands to build all our microservices and the UI. This builds Docker images out of Linux native executables. Under the quarkus-workshop-super-heroes/super-heroes directory, execute the following Docker commands:

cd rest-heroes
docker build -f src/main/docker/Dockerfile.build-native -t quarkus/rest-heroes .
cd ..

cd rest-villains
docker build -f src/main/docker/Dockerfile.build-native -t quarkus/rest-villains .
cd ..

cd rest-fights
docker build -f src/main/docker/Dockerfile.build-native -t quarkus/rest-fights .
cd ..

cd event-statistics
docker build -f src/main/docker/Dockerfile.build-native -t quarkus/event-statistics .
cd ..

cd ui-super-heroes
docker build -f src/main/docker/Dockerfile.build-native -t quarkus/ui-super-heroes .
cd ..

Check that you have all the Docker images installed locally:

docker image ls | grep quarkus

The output should look like this:

quarkus/ui-super-heroes         latest      266MB
quarkus/event-statistics        latest      145MB
quarkus/rest-fights             latest      198MB
quarkus/rest-villains           latest      158MB
quarkus/rest-heroes             latest      154MB

(Optional) Running local containers

Now that we have all our Docker containers created, let’s execute them all to be sure that everything is working.

Call to action

Under super-heroes/infrastructure you will find the docker-compose-app-local.yaml file. It declares all the needed infrastructure (databases, Kafka) as well as our microservices. Execute it with:

docker compose -f docker-compose-app-local.yaml up

To know that all your containers are started, you can use the following command:

docker compose -f docker-compose-app-local.yaml ps

You should get something similar to the following list. Make sure all your containers are in running status:

event-statistics    "./application -Dqua…"   running
kafka               "sh -c 'export CLUST…"   running
rest-fights         "./application -Dqua…"   running
rest-heroes         "./application -Dqua…"   running
rest-villains       "./application -Dqua…"   running
super-database      "docker-entrypoint.s…"   running (healthy)
ui-super-heroes     "/bin/sh -c 'npm sta…"   running

Once all the containers are started, check that the application works as expected.

Then, make sure you shut down the entire application with:

docker compose -f docker-compose-app-local.yaml down

(Optional) Pushing containers to Azure Container Registry

Now that we have all our Docker containers running locally, let’s push them to Azure Container Registry.

Make sure you’ve set all the environment variables defined in the previous chapter and that you’ve also created the resource group and the Azure Container Registry.

Call to action

Before you can push an image to your registry, you must tag it with the fully qualified name of your registry login server (the REGISTRY_URL variable). Tag the image using the docker tag commands:

docker tag quarkus/ui-super-heroes:latest   $UI_IMAGE
docker tag quarkus/event-statistics:latest  $STATISTICS_IMAGE
docker tag quarkus/rest-fights:latest       $FIGHTS_IMAGE
docker tag quarkus/rest-villains:latest     $VILLAINS_IMAGE
docker tag quarkus/rest-heroes:latest       $HEROES_IMAGE
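The fully qualified name is simply `<login-server>/<repository>:<tag>`. Here is a minimal sketch of how such a name is assembled, using a hypothetical login server and tag standing in for your own `REGISTRY_URL` and `IMAGES_TAG` values:

```shell
# Hypothetical values standing in for the real environment variables
REGISTRY_URL="superheroesregistry42.azurecr.io"  # your ACR login server
HEROES_APP="rest-heroes"
IMAGES_TAG="1.0"

# Fully qualified image name: <login-server>/<repository>:<tag>
HEROES_IMAGE="${REGISTRY_URL}/${HEROES_APP}:${IMAGES_TAG}"
echo "$HEROES_IMAGE"
# → superheroesregistry42.azurecr.io/rest-heroes:1.0
```

Docker uses the registry part of this name to decide where `docker push` sends the image, which is why the tag must match your registry login server exactly.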

Call to action

To be able to push these Docker images to Azure Registry, we first need to log in to the registry:

az acr login \
  --name "$REGISTRY"

You should see the prompt Login Succeeded.

Then, push all the images with the following commands:

docker push $UI_IMAGE
docker push $STATISTICS_IMAGE
docker push $FIGHTS_IMAGE
docker push $VILLAINS_IMAGE
docker push $HEROES_IMAGE

You can check that the images have been pushed to Azure Container Registry by executing the following command:

az acr repository list \
  --name "$REGISTRY" \
  --output table

You can also get some information on a particular repository or image if needed:

az acr repository show \
  --name "$REGISTRY" \
  --repository "$HEROES_APP"

You can visualize the content of the registry on the Azure Portal.

azure portal 4

Running remote containers

Call to action

If you did not skip the previous optional sections and built the containers yourself, edit the docker-compose-app-remote.yaml file under super-heroes/infrastructure and replace the name superheroesregistry with the value of the $REGISTRY variable.

Call to action

Now that we have all our Docker containers pushed to Azure Container Registry, let’s execute them with:

docker compose -f docker-compose-app-remote.yaml up

Once all the containers are started, you can:

On http://localhost:8080 you should see the user interface, and you should be able to fight super heroes against super villains:

angular ui

On http://localhost:8085 you should see the statistics of the fights. When super heroes and super villains fight, the statistics show which group has won the most fights and the percentage of fights won by each group. The UI is automatically updated at each fight:

angular ui stats

You should see the user interface and everything should work. Remember to shut down the entire application with:

docker compose -f docker-compose-app-remote.yaml down

Ok, enough running these containers locally! In the next chapter we will take these remote containers, configure them, and make them work remotely on Azure Container Apps.

Running the Application on Azure Container Apps


Now that we have containerized our application, pushed it to Azure Container Registry, and executed it locally, it is time to run it on Azure Container Apps. In this chapter we will create the needed infrastructure in Azure Container Apps (database, Kafka, etc.) and then deploy the containers so we can execute our application.

Deploying the Infrastructure

Before deploying our microservices to Azure Container Apps, we need to create the infrastructure.

Create a Container Apps environment

Call to action

First, let’s create the container apps environment. A container apps environment acts as a boundary for our containers. Containers deployed on the same environment use the same virtual network and the same Log Analytics workspace. Create the container apps environment with the following command:

az containerapp env create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" \
  --name "$CONTAINERAPPS_ENVIRONMENT" \
  --logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CLIENT_ID" \
  --logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"

Create the managed Postgres Databases

Call to action

We need to create three PostgreSQL databases so the Heroes, Villains and Fights microservices can store data. Because we also want to access these databases from an external SQL client, we make them available to the outside world thanks to the --public all parameter. Create the databases with the following commands:

az postgres flexible-server create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" application="$HEROES_APP" \
  --name "$HEROES_DB" \
  --admin-user "$POSTGRES_DB_ADMIN" \
  --admin-password "$POSTGRES_DB_PWD" \
  --public all \
  --sku-name "$POSTGRES_SKU" \
  --storage-size 4096 \
  --version "$POSTGRES_DB_VERSION"
az postgres flexible-server create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" application="$VILLAINS_APP" \
  --name "$VILLAINS_DB" \
  --admin-user "$POSTGRES_DB_ADMIN" \
  --admin-password "$POSTGRES_DB_PWD" \
  --public all \
  --sku-name "$POSTGRES_SKU" \
  --storage-size 4096 \
  --version "$POSTGRES_DB_VERSION"
az postgres flexible-server create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" application="$FIGHTS_APP" \
  --name "$FIGHTS_DB" \
  --admin-user "$POSTGRES_DB_ADMIN" \
  --admin-password "$POSTGRES_DB_PWD" \
  --public all \
  --sku-name "$POSTGRES_SKU" \
  --storage-size 4096 \
  --version "$POSTGRES_DB_VERSION"

Call to action

Then, we create the database schemas, one for each database:

az postgres flexible-server db create \
    --resource-group "$RESOURCE_GROUP" \
    --server-name "$HEROES_DB" \
    --database-name "$HEROES_DB_SCHEMA"
az postgres flexible-server db create \
    --resource-group "$RESOURCE_GROUP" \
    --server-name "$VILLAINS_DB" \
    --database-name "$VILLAINS_DB_SCHEMA"
az postgres flexible-server db create \
    --resource-group "$RESOURCE_GROUP" \
    --server-name "$FIGHTS_DB" \
    --database-name "$FIGHTS_DB_SCHEMA"

Call to action

Now that we have all our databases set up, it is time to create the tables and add some data to them. Each microservice comes with a set of database initialization files as well as some insert statements. Thanks to the Azure CLI we can execute these SQL scripts. Create the tables using the following commands (make sure you are under quarkus-workshop-super-heroes/super-heroes when executing them):

az postgres flexible-server execute \
    --name "$HEROES_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$HEROES_DB_SCHEMA" \
    --file-path "infrastructure/db-init/initialize-tables-heroes.sql"

If you get the error No module named 'psycopg2._psycopg' that means that some of your Azure CLI dependencies are not correctly installed. Check https://github.com/Azure/azure-cli/issues/21998 for help.

az postgres flexible-server execute \
    --name "$VILLAINS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$VILLAINS_DB_SCHEMA" \
    --file-path "infrastructure/db-init/initialize-tables-villains.sql"
az postgres flexible-server execute \
    --name "$FIGHTS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$FIGHTS_DB_SCHEMA" \
    --file-path "infrastructure/db-init/initialize-tables-fights.sql"

Call to action

Now, let’s add some super heroes and super villains to these databases:

az postgres flexible-server execute \
    --name "$HEROES_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$HEROES_DB_SCHEMA" \
    --file-path "rest-heroes/src/main/resources/import.sql"
az postgres flexible-server execute \
    --name "$VILLAINS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$VILLAINS_DB_SCHEMA" \
    --file-path "rest-villains/src/main/resources/import.sql"
az postgres flexible-server execute \
    --name "$FIGHTS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$FIGHTS_DB_SCHEMA" \
    --file-path "rest-fights/src/main/resources/import.sql"

You can check the content of the tables with the following commands:

az postgres flexible-server execute \
    --name "$HEROES_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$HEROES_DB_SCHEMA" \
    --querytext "select * from hero"
az postgres flexible-server execute \
    --name "$VILLAINS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$VILLAINS_DB_SCHEMA" \
    --querytext "select * from villain"
az postgres flexible-server execute \
    --name "$FIGHTS_DB" \
    --admin-user "$POSTGRES_DB_ADMIN" \
    --admin-password "$POSTGRES_DB_PWD" \
    --database-name "$FIGHTS_DB_SCHEMA" \
    --querytext "select * from fight"

Create the Managed Kafka

The Fight microservice communicates with the Statistics microservice through Kafka. We need to create an Azure Event Hubs namespace and an event hub (the Kafka topic) for that.

Call to action

az eventhubs namespace create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --tags system="$TAG" application="$FIGHTS_APP" \
  --name "$KAFKA_NAMESPACE"

Then, create the Kafka topic where the messages will be sent to and consumed from:

az eventhubs eventhub create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$KAFKA_TOPIC" \
  --namespace-name "$KAFKA_NAMESPACE"

To configure Kafka in the Fight and Statistics microservices, get the connection string with the following commands:

KAFKA_CONNECTION_STRING=$(az eventhubs namespace authorization-rule keys list \
  --resource-group "$RESOURCE_GROUP" \
  --namespace-name "$KAFKA_NAMESPACE" \
  --name RootManageSharedAccessKey \
  --output json | jq -r .primaryConnectionString)

JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="'
KAFKA_JAAS_CONFIG="${JAAS_CONFIG}${KAFKA_CONNECTION_STRING}\";"

echo $KAFKA_CONNECTION_STRING
echo $KAFKA_JAAS_CONFIG
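The two JAAS_CONFIG lines above simply concatenate three pieces: a fixed prefix naming the PlainLoginModule with the literal user name $ConnectionString, the connection string as the password, and a closing quote and semicolon. A sketch with a dummy connection string (the endpoint and key below are made-up placeholders, not real values) shows the assembly:

```shell
# Dummy connection string standing in for the value returned by az
KAFKA_CONNECTION_STRING='Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxx'

# Event Hubs' Kafka endpoint authenticates with SASL PLAIN: the literal
# user name is "$ConnectionString" (single-quoted so the shell does not
# expand it) and the password is the connection string itself.
JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="'
KAFKA_JAAS_CONFIG="${JAAS_CONFIG}${KAFKA_CONNECTION_STRING}\";"

echo "$KAFKA_JAAS_CONFIG"
```

The single quotes around JAAS_CONFIG matter: they keep `$ConnectionString` as a literal string instead of letting the shell expand an (empty) variable.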

If you log into the Azure Portal you should see the following created resources.

azure portal 3

Deploying the Applications

Now that the Azure Container Apps environment is all set, we need to deploy our microservices to it. So let’s create a Container Apps instance for each of our microservices and for the user interface.

If you haven’t built the containers and pushed them to your own Azure Container Registry, you need to change some environment variables. That means that instead of having REGISTRY="superheroesregistry"$UNIQUE_IDENTIFIER, you need to change it to REGISTRY="superheroesregistry" (so it uses the common registry).

REGISTRY="superheroesregistry"

REGISTRY_URL=$(az acr show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$REGISTRY" \
  --query "loginServer" \
  --output tsv)

HEROES_IMAGE="${REGISTRY_URL}/${HEROES_APP}:${IMAGES_TAG}"
VILLAINS_IMAGE="${REGISTRY_URL}/${VILLAINS_APP}:${IMAGES_TAG}"
FIGHTS_IMAGE="${REGISTRY_URL}/${FIGHTS_APP}:${IMAGES_TAG}"
STATISTICS_IMAGE="${REGISTRY_URL}/${STATISTICS_APP}:${IMAGES_TAG}"
UI_IMAGE="${REGISTRY_URL}/${UI_APP}:${IMAGES_TAG}"

Heroes Microservice

First, the Heroes microservice. The Heroes microservice needs to access the managed Postgres database. Therefore, we need to set the right properties using our environment variables. Notice that the Heroes microservice has a --min-replicas set to 0. That means it can scale down to zero if not used (more on that later).

Call to action

Create the Heroes container app with the following command:

az containerapp create \
  --resource-group "$RESOURCE_GROUP" \
  --tags system="$TAG" application="$HEROES_APP" \
  --image "$HEROES_IMAGE" \
  --name "$HEROES_APP" \
  --environment "$CONTAINERAPPS_ENVIRONMENT" \
  --ingress external \
  --target-port 8083 \
  --min-replicas 0 \
  --env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
             QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
             QUARKUS_DATASOURCE_USERNAME="$POSTGRES_DB_ADMIN" \
             QUARKUS_DATASOURCE_PASSWORD="$POSTGRES_DB_PWD" \
             QUARKUS_DATASOURCE_REACTIVE_URL="$HEROES_DB_CONNECT_STRING"

The following command sets the URL of the deployed application to the HEROES_URL variable:

HEROES_URL="https://$(az containerapp ingress show \
    --resource-group "$RESOURCE_GROUP" \
    --name "$HEROES_APP" \
    --output json | jq -r .fqdn)"

echo $HEROES_URL

You can now invoke the Hero microservice APIs with:

curl "$HEROES_URL/api/heroes/hello"
curl "$HEROES_URL/api/heroes" | jq

To access the logs of the Heroes microservice, you can write the following query:

az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == '$HEROES_APP' | project ContainerAppName_s, Log_s, TimeGenerated " \
  --output table

You might have to wait before the logs show up: Log Analytics can take some time to initialize.

Villains Microservice

The Villain microservice also needs to access the managed Postgres database, so we need to set the right variables.

Call to action

Notice the minimum of replicas is also set to 0:

az containerapp create \
  --resource-group "$RESOURCE_GROUP" \
  --tags system="$TAG" application="$VILLAINS_APP" \
  --image "$VILLAINS_IMAGE" \
  --name "$VILLAINS_APP" \
  --environment "$CONTAINERAPPS_ENVIRONMENT" \
  --ingress external \
  --target-port 8084 \
  --min-replicas 0 \
  --env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
             QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
             QUARKUS_DATASOURCE_USERNAME="$POSTGRES_DB_ADMIN" \
             QUARKUS_DATASOURCE_PASSWORD="$POSTGRES_DB_PWD" \
             QUARKUS_DATASOURCE_JDBC_URL="$VILLAINS_DB_CONNECT_STRING"

The following command sets the URL of the deployed application to the VILLAINS_URL variable:

VILLAINS_URL="https://$(az containerapp ingress show \
    --resource-group "$RESOURCE_GROUP" \
    --name "$VILLAINS_APP" \
    --output json | jq -r .fqdn)"

echo $VILLAINS_URL

You can now invoke the Villain microservice APIs with:

curl "$VILLAINS_URL/api/villains/hello"
curl "$VILLAINS_URL/api/villains" | jq

To access the logs of the Villain microservice, you can write the following query:

az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == '$VILLAINS_APP' | project ContainerAppName_s, Log_s, TimeGenerated " \
  --output table

Statistics Microservice

The Statistics microservice listens to a Kafka topic and consumes all the fights.

Call to action

Create the Statistics container application with the following command:

az containerapp create \
  --resource-group "$RESOURCE_GROUP" \
  --tags system="$TAG" application="$STATISTICS_APP" \
  --image "$STATISTICS_IMAGE" \
  --name "$STATISTICS_APP" \
  --environment "$CONTAINERAPPS_ENVIRONMENT" \
  --ingress external \
  --target-port 8085 \
  --min-replicas 0 \
  --env-vars KAFKA_BOOTSTRAP_SERVERS="$KAFKA_BOOTSTRAP_SERVERS" \
             KAFKA_SECURITY_PROTOCOL=SASL_SSL \
             KAFKA_SASL_MECHANISM=PLAIN \
             KAFKA_SASL_JAAS_CONFIG="$KAFKA_JAAS_CONFIG"

The following command sets the URL of the deployed application to the STATISTICS_URL variable:

STATISTICS_URL="https://$(az containerapp ingress show \
    --resource-group "$RESOURCE_GROUP" \
    --name "$STATISTICS_APP" \
    --output json | jq -r .fqdn)"

echo $STATISTICS_URL

You can now display the Statistics UI with:

open "$STATISTICS_URL"

To access the logs of the Statistics microservice, you can write the following query:

az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == '$STATISTICS_APP' | project ContainerAppName_s, Log_s, TimeGenerated " \
  --output table

Fights Microservice

The Fight microservice invokes the Heroes and Villains microservices, sends fight messages to a Kafka topic, and stores the fights in a Postgres database. We need to configure Kafka (same connection string as the one used by the Statistics microservice) as well as the Postgres database. As for the microservice invocations, you need to set the URLs of both the Heroes and Villains microservices.

Call to action

Create the Fights container application with the following command:

az containerapp create \
  --resource-group "$RESOURCE_GROUP" \
  --tags system="$TAG" application="$FIGHTS_APP" \
  --image "$FIGHTS_IMAGE" \
  --name "$FIGHTS_APP" \
  --environment "$CONTAINERAPPS_ENVIRONMENT" \
  --ingress external \
  --target-port 8082 \
  --min-replicas 0 \
  --env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
             QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
             QUARKUS_DATASOURCE_USERNAME="$POSTGRES_DB_ADMIN" \
             QUARKUS_DATASOURCE_PASSWORD="$POSTGRES_DB_PWD" \
             QUARKUS_DATASOURCE_JDBC_URL="$FIGHTS_DB_CONNECT_STRING" \
             KAFKA_BOOTSTRAP_SERVERS="$KAFKA_BOOTSTRAP_SERVERS" \
             KAFKA_SECURITY_PROTOCOL=SASL_SSL \
             KAFKA_SASL_MECHANISM=PLAIN \
             KAFKA_SASL_JAAS_CONFIG="$KAFKA_JAAS_CONFIG" \
             IO_QUARKUS_WORKSHOP_SUPERHEROES_FIGHT_CLIENT_HEROPROXY_MP_REST_URL="$HEROES_URL" \
             IO_QUARKUS_WORKSHOP_SUPERHEROES_FIGHT_CLIENT_VILLAINPROXY_MP_REST_URL="$VILLAINS_URL"

The following command sets the URL of the deployed application to the FIGHTS_URL variable:

FIGHTS_URL="https://$(az containerapp ingress show \
    --resource-group "$RESOURCE_GROUP" \
    --name "$FIGHTS_APP" \
    --output json | jq -r .fqdn)"

echo $FIGHTS_URL

Use the following curl commands to access the Fight microservice. Remember that we’ve set the minimum number of replicas to 0. That means that pinging the Hero and Villain microservices might fall back (you will get a Could not invoke the Villains microservice message). Execute the same curl commands several times so Azure Container Apps has time to instantiate a replica and process the requests:

curl "$FIGHTS_URL/api/fights/hello"
curl "$FIGHTS_URL/api/fights" | jq
curl "$FIGHTS_URL/api/fights/randomfighters" | jq
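Because the minimum replica count is 0, the first requests after an idle period hit a cold start and may fail until a replica is up. Instead of re-running the curl commands by hand, you can wrap them in a small retry loop. The sketch below demonstrates the pattern with a stand-in function (`cold_start_probe`, a hypothetical helper that fails twice before succeeding) rather than the real curl call:

```shell
# Retry a command up to N times, sleeping between attempts.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Stand-in for `curl "$FIGHTS_URL/api/fights/hello"` while the app scales
# up from zero: it fails on the first two calls, then succeeds.
calls=0
cold_start_probe() {
  calls=$((calls + 1))
  [ "$calls" -ge 3 ]
}

retry 5 cold_start_probe && echo "replica ready after $calls attempts"
# → replica ready after 3 attempts
```

In the workshop you would call something like `retry 5 curl -fsS "$FIGHTS_URL/api/fights/hello"` so the loop keeps probing until the replica answers.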

To access the logs of the Fight microservice, you can write the following query:

az monitor log-analytics query \
  --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == '$FIGHTS_APP' | project ContainerAppName_s, Log_s, TimeGenerated " \
  --output table

Super Hero UI

As for the previous microservices, we will deploy the UI as a Docker image. We could also have deployed the Super Hero UI using Azure Static Web Apps, which is well suited for Angular applications. If you are interested in this approach, you can check Azure Static Webapps.

Call to action

For now, let’s continue with Azure Container Apps and deploy the UI as a Docker image with the following command:

az containerapp create \
  --resource-group "$RESOURCE_GROUP" \
  --tags system="$TAG" application="$UI_APP" \
  --image "$UI_IMAGE" \
  --name "$UI_APP" \
  --environment "$CONTAINERAPPS_ENVIRONMENT" \
  --ingress external \
  --target-port 8080 \
  --env-vars API_BASE_URL="$FIGHTS_URL"
UI_URL="https://$(az containerapp ingress show \
    --resource-group "$RESOURCE_GROUP" \
    --name "$UI_APP" \
    --output json | jq -r .fqdn)"

echo $UI_URL
open "$UI_URL"

Running the Application

Now that the entire infrastructure is created and the microservices are deployed, you can use the following commands to either directly invoke the APIs or use the user interfaces:

curl "$HEROES_URL/api/heroes" | jq
curl "$VILLAINS_URL/api/villains" | jq
curl "$FIGHTS_URL/api/fights/randomfighters" | jq
open "$STATISTICS_URL"
open "$UI_URL"

Clean Up Azure Resources

Do NOT forget to remove the Azure resources once you are done running the workshop.

az group delete \
  --name "$RESOURCE_GROUP"

Conclusion


This is the end of the Super Hero workshop. We hope you liked it, learnt a few things, and more importantly, will be able to take this knowledge back to your projects.

This workshop started by making sure your development environment was ready to develop the entire application. Then, there was some brief terminology to help you understand the concepts around Quarkus. If you found it too short and need more details on Quarkus, Microservices, MicroProfile, Cloud Native, or GraalVM, check the Quarkus website for more references.[8]

Then, we focused on developing several isolated microservices: some written in pure JAX-RS (such as the Villain), others with Reactive JAX-RS and Reactive Hibernate (such as the Hero). These microservices return data in JSON, validate data thanks to Bean Validation, and store and retrieve data from a relational database with the help of JPA, Panache and JTA.

You then installed an already coded Angular application on another instance of Quarkus. At this stage, the Angular application couldn’t access the microservices because of CORS issues that we quickly fixed.

Then, we made the microservices communicate with each other over HTTP thanks to REST Client. But HTTP-related technologies usually use synchronous communication and therefore need to deal with invocation failure. With Fault Tolerance, it was just a matter of adding a few annotations to get a fallback when the communication fails.

That’s also why we introduced Reactive Messaging with Kafka: so we don’t have a temporal coupling between the microservices.

Remember that you can find all the code for this fascicle at https://github.com/quarkusio/quarkus-workshops/tree/main/quarkus-workshop-super-heroes. If some parts were not clear enough, or if you found something missing, a bug, or you just want to leave a note or suggestion, please use the GitHub issue tracker at https://github.com/quarkusio/quarkus-workshops/issues.

References


1. Docker commands https://docs.docker.com/engine/reference/commandline/cli
2. cURL https://curl.haxx.se
3. Homebrew https://brew.sh
4. cURL commands https://ec.haxx.se/cmdline.html
5. jq https://stedolan.github.io/jq
6. Git https://git-scm.com
7. Also called Ahead-of-Time Compilation https://www.graalvm.org/docs/reference-manual/native-image
8. Quarkus https://quarkus.io